
    Performance Analysis of Security Protocols

    Security is critical to a wide range of applications and services. Numerous security mechanisms and protocols have been developed and are widely used on today's Internet. These protocols, which provide secrecy, authentication, and integrity control, are essential to protecting electronic information. There are many types of security protocols and mechanisms, such as symmetric key algorithms, asymmetric key algorithms, message digests, digital certificates, and Secure Sockets Layer (SSL) communication. Symmetric and asymmetric key algorithms provide secrecy, message digests are used for authentication, and SSL communication provides a secure connection between two sockets. The purpose of this graduate project was to perform a performance analysis of various security protocols: comparisons of the symmetric key algorithms DES (Data Encryption Standard), 3DES (Triple DES), AES (Advanced Encryption Standard), and RC4; of the public-private key algorithms RSA and ElGamal; of digital certificates using the message digests SHA1 (Secure Hash Algorithm) and MD5; and of SSL communication using 3DES with SHA1 and RC4 with MD5.
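
    Comparisons like these lend themselves to direct measurement. Below is a minimal benchmarking sketch, not the project's actual harness: it times the SHA1/MD5 digests from Python's standard library and AES from the third-party `cryptography` package; payload size and run count are arbitrary choices.

    ```python
    import hashlib
    import os
    import time

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def bench(label, fn, data, runs=50):
        """Time fn over data and report the mean per-run cost."""
        start = time.perf_counter()
        for _ in range(runs):
            fn(data)
        ms = (time.perf_counter() - start) / runs * 1e3
        print(f"{label}: {ms:.3f} ms per {len(data)} bytes")

    payload = os.urandom(1 << 20)  # 1 MiB of random input (arbitrary size)

    # Message digests from the standard library: the SHA1 vs. MD5 comparison.
    bench("SHA1", lambda d: hashlib.sha1(d).digest(), payload)
    bench("MD5 ", lambda d: hashlib.md5(d).digest(), payload)

    # One symmetric cipher, AES-128-CBC; DES/3DES/RC4 would be timed the same
    # way, but modern versions of the library deprecate or drop them.
    aes = Cipher(algorithms.AES(os.urandom(16)), modes.CBC(os.urandom(16)))

    def aes_encrypt(data):
        enc = aes.encryptor()  # fresh encryption context per run
        return enc.update(data) + enc.finalize()

    bench("AES ", aes_encrypt, payload)
    ```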

    Collaborative Inference in DNN-based Satellite Systems with Dynamic Task Streams

    As a driving force in the advancement of intelligent in-orbit applications, DNN models have gradually been integrated into satellites, producing daily latency-constrained and computation-intensive tasks. However, the substantial computational demands of DNN models, coupled with the instability of the satellite-ground link, pose significant challenges that hinder the timely completion of tasks. When handling tasks that require latency guarantees, such as dynamic observation tasks on the satellites, it becomes necessary to adapt to changes in the task stream. To this end, we consider a system model for a collaborative inference system with latency constraints that leverages multi-exit and model partition techniques. We propose an algorithm tailored to the trade-off between task completion and satisfactory task accuracy, which dynamically chooses early-exit and partition points. Simulation evaluations show that our proposed algorithm significantly outperforms baseline algorithms across task streams with strict latency constraints.
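
    As a rough illustration of the trade-off such an algorithm navigates, the sketch below enumerates hypothetical (early-exit, partition) configurations and picks the most accurate one whose end-to-end latency (on-board compute + uplink + ground compute) fits the deadline. Every profile and link rate is an invented placeholder, not the paper's method or numbers.

    ```python
    # Hypothetical profile of a multi-exit DNN split between satellite and ground.
    CANDIDATES = [
        # (exit point, partition point, accuracy, sat_ms, uplink_kB, ground_ms)
        ("exit1", "layer4",  0.71,  8.0,   1.0,  0.0),  # early exit, fully on-board
        ("exit2", "layer8",  0.83, 15.0, 120.0,  6.0),  # split: features sent down
        ("final", "layer12", 0.91, 22.0, 300.0, 14.0),  # full model, heaviest uplink
    ]

    def choose_config(deadline_ms: float, link_kbps: float):
        """Best-accuracy (early-exit, partition) pair that still meets the deadline."""
        best = None
        for exit_pt, part_pt, acc, sat_ms, kB, ground_ms in CANDIDATES:
            latency = sat_ms + (kB * 8.0 / link_kbps) * 1000.0 + ground_ms
            if latency <= deadline_ms and (best is None or acc > best[2]):
                best = (exit_pt, part_pt, acc, latency)
        return best  # None means no configuration can meet the deadline

    print(choose_config(deadline_ms=50.0, link_kbps=20000.0))
    ```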

    Federated Domain Generalization: A Survey

    Machine learning typically relies on the assumption that training and testing distributions are identical and that data is centrally stored for training and testing. However, in real-world scenarios, distributions may differ significantly, and data is often distributed across different devices, organizations, or edge nodes. Consequently, it is imperative to develop models that can effectively generalize to unseen distributions while data remains distributed across different domains. In response to this challenge, there has been a surge of interest in federated domain generalization (FDG) in recent years. FDG combines the strengths of federated learning (FL) and domain generalization (DG) to enable multiple source domains to collaboratively learn a model capable of directly generalizing to unseen domains while preserving data privacy. However, generalizing the federated model under domain shift is a technically challenging problem that has so far received scant attention. This paper presents the first survey of recent advances in this area. Initially, we discuss the development from traditional machine learning to domain adaptation and domain generalization, leading to FDG, and provide the corresponding formal definition. Then, we categorize recent methodologies into four classes: federated domain alignment, data manipulation, learning strategies, and aggregation optimization, and present the relevant algorithms in detail for each category. Next, we introduce commonly used datasets, applications, evaluations, and benchmarks. Finally, we conclude this survey with some potential research topics for the future.
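
    To make the FL building block of FDG concrete, here is a minimal FedAvg-style aggregation sketch: each source domain trains locally and shares only parameters, and the server forms a size-weighted average. This illustrates the generic federated averaging step, not any specific FDG method from the survey.

    ```python
    import numpy as np

    def fedavg(client_params, client_sizes):
        """Size-weighted average of per-domain parameters (FedAvg-style).

        Each source domain shares only parameters, never raw data, which is
        the privacy-preserving ingredient FDG inherits from FL.
        """
        total = float(sum(client_sizes))
        return [
            sum(p[k] * (n / total) for p, n in zip(client_params, client_sizes))
            for k in range(len(client_params[0]))
        ]

    # Three source domains, each holding a toy two-tensor "model".
    rng = np.random.default_rng(0)
    domains = [[rng.normal(size=(4, 4)), rng.normal(size=4)] for _ in range(3)]
    global_model = fedavg(domains, client_sizes=[100, 250, 50])
    print(global_model[0].shape, global_model[1].shape)  # (4, 4) (4,)
    ```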

    EDGF: Empirical dataset generation framework for wireless sensor networks

    In wireless sensor networks (WSNs), simulation practices, system models, algorithms, and protocols have been published worldwide based on assumptions of randomness. Randomness is applied broadly in WSNs, e.g., in random deployment, activity tracking, packet generation, etc. Yet even when authors provide adequate formal and informal information and assurances, validating a proposal remains challenging: minuscule alterations in implementation and validation can have an enormous effect on the eventual results. In this work, we show how results are affected by generalized assumptions about randomness. In sensor node deployment, ambiguity arises due to the node error-value (ε), and we estimate its upper bound on the relative position to understand the sensitivity to diminutive changes. Moreover, we generalize the effects of uniformity in the traffic and of the scheduling position of nodes. We propose an algorithm to generate a unified dataset for general and some specific application system models in WSNs. The results produced by our algorithm reflect pseudo-randomness and can be efficiently regenerated from a seed value for validation.
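
    The reproducibility idea, regenerating an identical dataset from a seed while bounding the node error-value ε, can be sketched as follows. The deployment model and the role of `eps` here are illustrative assumptions, not the EDGF algorithm itself.

    ```python
    import numpy as np

    def deploy_nodes(num_nodes: int, area: float, eps: float, seed: int):
        """Seeded pseudo-random deployment so a dataset can be regenerated exactly.

        `eps` is a hypothetical bound on the node error-value: each intended
        position is perturbed by at most eps in each coordinate.
        """
        rng = np.random.default_rng(seed)  # same seed -> same dataset
        intended = rng.uniform(0.0, area, size=(num_nodes, 2))
        error = rng.uniform(-eps, eps, size=(num_nodes, 2))
        return np.clip(intended + error, 0.0, area)

    a = deploy_nodes(50, area=100.0, eps=0.5, seed=42)
    b = deploy_nodes(50, area=100.0, eps=0.5, seed=42)
    assert np.array_equal(a, b)  # reproducible for third-party validation
    ```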

    An Ensemble Learning Approach for Reversible Data Hiding in Encrypted Images with Fibonacci Transform

    Reversible data hiding (RDH) is an active area of research in the field of information security. In RDH, a secret can be embedded inside a cover medium. Unlike other data-hiding schemes, RDH is important in applications that demand recovery of the cover without any deformation, along with recovery of the hidden secret. In this paper, a new scheme is proposed for reversible data hiding in encrypted images using a Fibonacci transform with an ensemble learning method. In the proposed scheme, the data hider encrypts the original image and then performs data hiding. During data hiding, the encrypted image is partitioned into non-overlapping blocks, and each block is considered one by one. A selected block undergoes a series of Fibonacci transforms; the number of Fibonacci transforms applied to the block is determined by the integer value the data hider wants to embed. On the receiver side, message extraction and image restoration are performed with the help of ensemble learning: the receiver tries all possible Fibonacci transforms and decrypts the blocks, and the correctly recovered block is identified by trained machine-learning models. The novelty of the scheme lies in (1) retaining the encrypted pixel intensities unaltered while hiding the data: almost every RDH scheme described in the literature alters the encrypted pixel intensities to embed the data, which represents a security concern for the encryption algorithm; and (2) introducing an efficient means of recovery through an ensemble model framework, where the majority vote of the different trained models guarantees correct recovery of the cover image. The proposed scheme reduces the bit error rate during message extraction and is well suited to areas such as medical image transmission and cloud computing. Experimental results show that the proposed RDH scheme attains an improved payload capacity of 0.0625 bits per pixel, outperforming many related RDH schemes, with complete reversibility.
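
    One common form of Fibonacci (Q-matrix) image scrambling maps pixel (x, y) to (x + y mod n, x) in an n×n block, so embedding the integer k means applying the map k times. The sketch below shows that form and the property the scheme's novelty claim rests on (intensities are permuted, never altered); the exact transform and block handling in the paper may differ.

    ```python
    import numpy as np

    def fib_scramble(block: np.ndarray, times: int) -> np.ndarray:
        """Scramble an n x n block with the Q-matrix map (x, y) -> (x + y, x) mod n.

        Pixels are only moved, never modified, so encrypted intensities stay
        unaltered.
        """
        n = block.shape[0]
        out = block.copy()
        for _ in range(times):
            nxt = np.empty_like(out)
            for x in range(n):
                for y in range(n):
                    nxt[(x + y) % n, x] = out[x, y]
            out = nxt
        return out

    block = np.arange(64, dtype=np.uint8).reshape(8, 8)
    hidden = fib_scramble(block, times=3)  # "3" is the embedded integer
    # Same multiset of intensities before and after: a pure permutation.
    assert np.array_equal(np.sort(hidden.ravel()), np.sort(block.ravel()))
    ```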

    Joint Beamforming, Power Allocation, and Splitting Control for SWIPT-Enabled IoT Networks with Deep Reinforcement Learning and Game Theory

    Future wireless networks promise immense increases in data rate and energy efficiency while overcoming the difficulty of charging wireless stations or devices in the Internet of Things (IoT) through simultaneous wireless information and power transfer (SWIPT). For such networks, jointly optimizing beamforming, power control, and energy harvesting to enhance the communication performance from the base stations (BSs) (or access points (APs)) to the mobile nodes (MNs) served is a real challenge. In this work, we formulate the joint optimization as a mixed integer nonlinear programming (MINLP) problem, which can also be viewed as a complex multiple resource allocation (MRA) optimization problem subject to different allocation constraints. Using deep reinforcement learning to estimate the future rewards of actions from information reported by the users served by the network, we introduce single-layer MRA algorithms based on deep Q-learning (DQN) and deep deterministic policy gradient (DDPG), respectively, as the basis for downlink wireless transmission. Moreover, by combining the data-driven DQN technique with the strength of a noncooperative game-theoretic model, we propose a two-layer iterative approach to resolve the NP-hard MRA problem, which further improves the communication performance in terms of data rate, energy harvesting, and power consumption. For the two-layer approach, we also introduce a pricing strategy in which BSs or APs determine their power costs on the basis of social utility maximization to control the transmit power. Finally, in a simulated environment based on realistic wireless networks, our numerical results show that the proposed two-layer MRA algorithm achieves a utility up to 2.3 times higher than its single-layer counterparts (the data-driven deep reinforcement learning algorithms extended to resolve the problem), in terms of utilities designed to reflect the trade-off among the performance metrics considered.
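
    To give a feel for the learning layer, here is a toy value-learning loop over a discretized joint action space (beam index, power level, splitting ratio). A single-state tabular Q-update stands in for the paper's DQN/DDPG networks, and the reward is a stub; everything here is an illustrative assumption, not the proposed algorithm.

    ```python
    import numpy as np

    # Discretized joint action space: (beam index, power level, splitting ratio).
    BEAMS, POWERS, SPLITS = 4, 5, 5
    rng = np.random.default_rng(1)
    q = np.zeros(BEAMS * POWERS * SPLITS)  # one value per joint action
    alpha, gamma, eps = 0.1, 0.9, 0.1

    def reward(action: int) -> float:
        """Stub utility; the real one would mix rate, harvested energy, power cost."""
        return float(rng.normal(loc=action % 7, scale=1.0))

    for _ in range(5000):
        # Epsilon-greedy exploration over the joint action space.
        a = int(rng.integers(q.size)) if rng.random() < eps else int(np.argmax(q))
        q[a] += alpha * (reward(a) + gamma * np.max(q) - q[a])  # single-state update

    beam, rest = divmod(int(np.argmax(q)), POWERS * SPLITS)
    power, split = divmod(rest, SPLITS)
    print(f"learned choice: beam={beam}, power level={power}, split level={split}")
    ```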

    Joint Data Transmission and Energy Harvesting for MISO Downlink Transmission Coordination in Wireless IoT Networks

    The advent of simultaneous wireless information and power transfer (SWIPT) has been regarded as a promising technique to provide power supplies for an energy-sustainable Internet of Things (IoT), which is of paramount importance given the proliferation of high data communication demands from low-power network devices. In such networks, a multi-antenna base station (BS) in each cell can be utilized to concurrently transmit messages and energy to its intended IoT user equipment (IoT-UE) with a single antenna under a common broadcast frequency band, resulting in a multi-cell multi-input single-output (MISO) interference channel (IC). In this work, we aim to find the trade-off between spectrum efficiency (SE) and energy harvesting (EH) in SWIPT-enabled networks with MISO ICs. To this end, we derive a multi-objective optimization (MOO) formulation to obtain the optimal beamforming pattern (BP) and power splitting ratio (PR), and we propose a fractional programming (FP) model to find the solution. To tackle the nonconvexity of the FP model, an evolutionary algorithm (EA)-aided quadratic transform technique is proposed, which recasts the nonconvex problem as a sequence of convex problems to be solved iteratively. To further reduce the communication overhead and computational complexity, a distributed multi-agent learning-based approach is proposed that requires only partial observation of the channel state information (CSI). In this approach, each BS is equipped with a double deep Q network (DDQN) to determine the BP and PR for its UE with lower computational complexity, based on observations gathered through a limited information exchange process. Finally, through simulation experiments, we verify the trade-off between SE and EH and demonstrate that, apart from the FP algorithm introduced to provide superior solutions, the proposed DDQN algorithm achieves a utility up to 1.23, 1.87, and 3.45 times larger than the Advantage Actor-Critic (A2C), greedy, and random algorithms, respectively, in the simulated environment.
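
    The quadratic transform that underlies such FP approaches can be shown on a scalar toy ratio: fix an auxiliary variable y at the current point, maximize the concave surrogate 2y√A(x) − y²B(x), and repeat. The functions below are invented stand-ins for a rate numerator and a power denominator, not the paper's actual objective or constraint set.

    ```python
    import numpy as np

    # Invented stand-ins for a "rate" numerator A and a "power" denominator B.
    A = lambda x: np.log2(1.0 + 2.0 * x)
    B = lambda x: 1.0 + 0.5 * x

    x_grid = np.linspace(0.0, 5.0, 2001)  # toy 1-D feasible set
    x = 1.0                               # initial point
    for _ in range(20):
        y = np.sqrt(A(x)) / B(x)          # 1) fix auxiliary variable at current x
        surrogate = 2.0 * y * np.sqrt(A(x_grid)) - y**2 * B(x_grid)
        x = float(x_grid[np.argmax(surrogate)])  # 2) maximize concave surrogate

    print(f"x* ~= {x:.3f}, ratio ~= {A(x) / B(x):.4f}")
    ```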

    Stochastic Modeling for Intelligent Software-Defined Vehicular Networks: A Survey

    Digital twins and the Internet of Things (IoT) have gained significant research attention in recent years due to their potential advantages in various domains, and vehicular ad hoc networks (VANETs) are one such application. VANETs can provide a wide range of services for passengers and drivers, including safety, convenience, and information. The dynamic nature of these environments poses several challenges, including intermittent connectivity, quality of service (QoS), and heterogeneous applications. Combining intelligent technologies and software-defined networking (SDN) with VANETs, termed intelligent software-defined vehicular networks (iSDVNs), addresses these challenges. In this context, a considerable body of research has been published, and we summarize its benefits and limitations. We also survey stochastic modeling and performance analysis for iSDVNs and the use of machine-learning algorithms through digital twin networks (DTNs), which are also part of iSDVNs. We first present a taxonomy of SDVN architectures based on their modes of operation. Next, we survey and classify the state-of-the-art iSDVN routing protocols, stochastic computations, and resource allocations. As SDN evolves, its complexity increases, posing a significant challenge to efficient network management; digital twins offer a promising solution to this challenge. This paper explores the relationship between digital twins and SDN and proposes a novel approach to improve network management in SDN environments by extending digital twin capabilities. We analyze the pitfalls of the state-of-the-art iSDVN protocols and compare them in tables. Finally, we summarize several challenges faced by current iSDVNs and possible future directions for making iSDVNs autonomous.

    Blockchain for Internet of Underwater Things: State-of-the-Art, Applications, Challenges, and Future Directions

    The Internet of Underwater Things (IoUT) has become widely popular in the past decade, as it holds huge prospects for the economy due to its applicability in use cases such as environmental monitoring, disaster management, localization, defense, and underwater exploration. However, each of these use cases poses specific challenges with respect to security, privacy, transparency, and traceability, which can be addressed by integrating blockchain with the IoUT. Blockchain is a Distributed Ledger Technology (DLT) consisting of a series of blocks chained together in chronological order across a distributed network. In this paper, we present a first-of-its-kind survey on the integration of blockchain with the IoUT. The paper first discusses blockchain technology and the IoUT and points out the benefits of integrating blockchain technology into IoUT systems. An overview of various applications, the respective challenges, and possible future directions of blockchain-enabled IoUT systems is also presented. Finally, the work sheds light on the critical aspects of IoUT systems and will enable researchers to address the challenges using blockchain technology.
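
    The DLT structure the survey builds on can be illustrated with a minimal hash-chained block in a few lines. This is a generic sketch of block chaining, not any IoUT-specific protocol, and the sensor payload is invented.

    ```python
    import hashlib
    import json
    import time

    def make_block(data: str, prev_hash: str) -> dict:
        """A minimal block: a payload plus the hash of the previous block."""
        block = {"time": time.time(), "data": data, "prev": prev_hash}
        payload = json.dumps(block, sort_keys=True).encode()
        block["hash"] = hashlib.sha256(payload).hexdigest()
        return block

    genesis = make_block("genesis", "0" * 64)
    b1 = make_block("sensor: depth=120m temp=4.1C", genesis["hash"])
    # Altering genesis["data"] changes its re-computed hash, so b1["prev"] no
    # longer matches -- tampering anywhere breaks every later link.
    print(b1["prev"] == genesis["hash"])  # True
    ```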